Timing of Compositor

Conclusions

One diagram first — let's start with the conclusions before walking through the process. The diagram is explained as follows:

img

  • case 1:
    • First a wait_frame is issued, which checks whether present_offset_ns + app_time_ns + now would overshoot the point T1.
    • If it would not, predicted_display_time_ns is set to T1.
    • predicted_display_time_ns minus (present_offset_ns + app_time_ns) gives the latest point at which work must start.
    • Once wait_frame returns, the thread sleeps until that computed point.
  • case 2:
    • The client issues xrWaitFrame; this can happen at any moment.
    • When the client works out its predicted_display_time_ns, it accounts for:
      • the client's own CPU time (application logic)
      • the client's own GPU rendering time
      • some extra overhead (margin)
      • how long the Compositor Thread spent on its last compose
    • Only after weighing all of the above does the client arrive at its predicted_display_time_ns.
    • In addition, because composition runs in the runtime process, there is also a delivery_time_ns: the moment by which the client must have handed its data over to the runtime.
  • error case 1 and error case 2:
    • Both of these are failure cases; in both, everything is pushed back by one frame interval.

Taken together, several of these times are tunable:

  • Compositor Thread:
    • present_offset_ns: roughly the GPU time the runtime needs; Monado's code sets an empirical value of 4ms.
    • app_time_ns: the CPU time the runtime needs, currently set to 20% of the frame interval.
  • Client (the application process):
    • cpu_time_ns: empirical value 2ms
    • draw_time_ns: empirical value 2ms
    • margin_ns: empirical value 2ms
    • diff time: obtained from the Compositor Thread; it estimates the total compositor time, so the app must schedule its delivery around it.

xrWaitFrame

In the timing machinery everything starts from xrWaitFrame, and the spec devotes a sizeable section to it.

Spec of xrWaitFrame

https://www.khronos.org/registry/OpenXR/specs/1.0/html/xrspec.html#xrWaitFrame

xrWaitFrame throttles the application frame loop in order to synchronize application frame submissions with the display.

  • This sentence just describes the function's purpose: it throttles the application's submission cadence.

xrWaitFrame returns a predicted display time for the next time that the runtime predicts a composited frame will be displayed. The runtime may affect this computation by changing the return values and throttling of xrWaitFrame in response to feedback from frame submission and completion times in xrEndFrame.

  • This passage carries a few pieces of information:
    • xrWaitFrame returns a predicted display time, a point in the future: when the next composited frame will be shown.
    • The returned time is affected by the previous xrEndFrame.

An application must eventually match each xrWaitFrame call with one call to xrBeginFrame. A subsequent xrWaitFrame call must block until the previous frame has been begun with xrBeginFrame and must unblock independently of the corresponding call to xrEndFrame.

  • Nothing special here; it just pins down the calling order of xrWaitFrame, xrBeginFrame and xrEndFrame.

When less than one frame interval has passed since the previous return from xrWaitFrame, the runtime should block until the beginning of the next frame interval. If more than one frame interval has passed since the last return from xrWaitFrame, the runtime may return immediately or block until the beginning of the next frame interval.

  • If less than one frame interval separates two xrWaitFrame calls, the call blocks until the next frame interval begins — an interesting point; the implementation handles it with a sleep. If more than one interval has passed since the last return, the behavior is left up to the runtime.

In the case that an application has pipelined frame submissions, the application should compute the appropriate target display time using both the predicted display time and predicted display interval. The application should use the computed target display time when requesting space and view locations for rendering.

  • So the application needs the future display time (predicted display time) and the display interval (predicted display interval), and uses that predicted time when asking OpenXR for space & view data to complete its rendering — that is, what the application's future image should look like.

The XrFrameState::predictedDisplayTime returned by xrWaitFrame must be monotonically increasing.

  • The time is monotonically increasing; in the actual code this is clock_gettime(CLOCK_MONOTONIC, &ts);, i.e. time since system boot.

The runtime may dynamically adjust the start time of the frame interval relative to the display hardware’s refresh cycle to minimize graphics processor contention between the application and the compositor.

  • The runtime may dynamically shift where the frame interval starts relative to the display's refresh cycle (and the interval itself changes if the frame rate changes), to reduce GPU contention between application and compositor.

xrWaitFrame must be callable from any thread, including a different thread than xrBeginFrame/xrEndFrame are being called from.

Calling xrWaitFrame must be externally synchronized by the application, concurrent calls may result in undefined behavior.

The runtime must return XR_ERROR_SESSION_NOT_RUNNING if the session is not running.

  • Nothing extra to add here: callable from any thread, but externally synchronized.

To sum up, xrWaitFrame produces the future display time (predicted display time) and the display interval (predicted display interval).

Out Params: XrFrameState

So where do these two values come from? Per the spec, they are read out of XrFrameState. Here is what the spec says about XrFrameState:

XrFrameState describes the time at which the next frame will be displayed to the user. predictedDisplayTime must refer to the midpoint of the interval during which the frame is displayed. The runtime may report a different predictedDisplayPeriod from the hardware’s refresh cycle.

  • predictedDisplayTime: the midpoint of the interval during which the frame is displayed.
  • predictedDisplayPeriod: may differ from the hardware's refresh cycle.

For any frame where shouldRender is XR_FALSE, the application should avoid heavy GPU work for that frame, for example by not rendering its layers. This typically happens when the application is transitioning into or out of a running session, or when some system UI is fully covering the application at the moment. As long as the session is running, the application should keep running the frame loop to maintain the frame synchronization to the runtime, even if this requires calling xrEndFrame with all layers omitted.

  • Explains what shouldRender is for; we won't expand on it here.

Implementation

Overview

Diagram first, code after. The Compositor is a rather convoluted piece of logic; hopefully this article can untangle it.

img

The data structures

Monado's data structures here are somewhat convoluted and split into two groups:

  • In the application process, i.e. the client:
    • multi_compositor
    • u_pacing_app
  • In the runtime process, on the Compositor Thread:
    • multi_system_compositor
    • comp_compositor
    • display_timing
    • fake_timing

Both multi_compositor and comp_compositor are subclasses of xrt_compositor_native, so they are bona fide compositors.

multi_system_compositor is a subclass of xrt_system_compositor and does not actually do composition work.

So keep in mind: on the client side, calls like xrWaitFrame ultimately land on multi_compositor, while the real work on the Compositor Thread is done by comp_compositor.

Compositor Thread

As the entry point into the code I choose the Compositor Thread, because it is initialized early on; this was already covered in the《Monado Out Of Process》analysis (nativeStartServer), so I won't expand on it again here:

Initialization

On the Compositor Thread the data structure in play is multi_system_compositor; its initialization:

monado\src\xrt\compositor\multi\comp_multi_system.c
xrt_result_t
comp_multi_create_system_compositor(struct xrt_compositor_native *xcn,
                                    const struct xrt_system_compositor_info *xsci,
                                    struct xrt_system_compositor **out_xsysc)
{
    struct multi_system_compositor *msc = U_TYPED_CALLOC(struct multi_system_compositor);
    ......
    msc->xcn = xcn;
    //! @todo Make the clients not go from IDLE to READY before we have completed a first frame.
    // Make sure there is at least some sort of valid frame data here.
    msc->last_timings.predicted_display_time_ns = os_monotonic_get_ns();   // As good as any time.
    msc->last_timings.predicted_display_period_ns = U_TIME_1MS_IN_NS * 16; // Just a wild guess.
    msc->last_timings.diff_ns = U_TIME_1MS_IN_NS * 5;                      // Make sure it's not zero at least.
    ......
    os_thread_helper_start(&msc->oth, thread_func, msc);
    ......
    return XRT_SUCCESS;
}

Four things here deserve attention:

  • msc->xcn comes from the parameter struct xrt_compositor_native *xcn — what is it, exactly?
  • The three initialized fields:
    • msc->last_timings.predicted_display_time_ns = os_monotonic_get_ns();, simply the current time
    • msc->last_timings.predicted_display_period_ns = U_TIME_1MS_IN_NS * 16;, assume a 16ms display period for now
    • msc->last_timings.diff_ns = U_TIME_1MS_IN_NS * 5;, a diff of 5ms

With the first question in mind — what is struct xrt_compositor_native *xcn — look at the caller:

monado\src\xrt\compositor\main\comp_compositor.c
xrt_result_t
xrt_gfx_provider_create_system(struct xrt_device *xdev, struct xrt_system_compositor **out_xsysc)
{
    struct comp_compositor *c = U_TYPED_CALLOC(struct comp_compositor);
    ....
    return comp_multi_create_system_compositor(&c->base.base, sys_info, out_xsysc);
}

  • Conclusion: msc->xcn is actually the comp_compositor.

The update flow

The Compositor Thread's workflow is actually quite clear.

monado\src\xrt\compositor\multi\comp_multi_system.c
static int
multi_main_loop(struct multi_system_compositor *msc)
{
    ......
    //1. Here xc is actually the comp_compositor.
    struct xrt_compositor *xc = &msc->xcn->base;
    ......
    os_thread_helper_lock(&msc->oth);
    while (os_thread_helper_is_running_locked(&msc->oth)) {
        os_thread_helper_unlock(&msc->oth);
        int64_t frame_id;
        uint64_t wake_time_ns = 0;
        uint64_t predicted_gpu_time_ns = 0;
        uint64_t predicted_display_time_ns = 0;
        uint64_t predicted_display_period_ns = 0;
        //2. Get the actual times.
        wait_frame(                        //
            xc,                            //
            &frame_id,                     //
            &wake_time_ns,                 //
            &predicted_gpu_time_ns,        //
            &predicted_display_time_ns,    //
            &predicted_display_period_ns); //
        uint64_t now_ns = os_monotonic_get_ns();
        uint64_t diff_ns = predicted_display_time_ns - now_ns;
        //3. Push the update out to the clients.
        broadcast_timings(msc, predicted_display_time_ns, predicted_display_period_ns, diff_ns);
        ......
        os_thread_helper_lock(&msc->oth);
    }
    ......
    return 0;
}

Since our focus is the timing update, the relevant piece of the Compositor Thread is wait_frame; let's keep following the code:

monado\src\xrt\compositor\multi\comp_multi_system.c
static void
wait_frame(struct xrt_compositor *xc,
           int64_t *out_frame_id,
           uint64_t *out_wake_time_ns,
           uint64_t *out_predicted_gpu_time_ns,
           uint64_t *out_predicted_display_time_ns,
           uint64_t *out_predicted_display_period_ns)
{
    ......
    int64_t frame_id = -1;
    uint64_t wake_up_time_ns = 0;
    //1. This actually calls xrt_comp_predict_frame, so this function is a wrapper.
    xrt_comp_predict_frame(               //
        xc,                               //
        &frame_id,                        //
        &wake_up_time_ns,                 //
        out_predicted_gpu_time_ns,        //
        out_predicted_display_time_ns,    //
        out_predicted_display_period_ns); //
    uint64_t now_ns = os_monotonic_get_ns();
    //2. If the wake-up time is later than now, sleep until then.
    //   This is where you can see the compositor pacing the frame's display time.
    if (now_ns < wake_up_time_ns) {
        os_nanosleep(wake_up_time_ns - now_ns);
    }
    now_ns = os_monotonic_get_ns();
    ......
    *out_frame_id = frame_id;
    *out_wake_time_ns = wake_up_time_ns;
}

Digging one level deeper, xrt_comp_predict_frame is one of Monado's usual wrapping idioms:

monado\src\xrt\include\xrt\xrt_compositor.h
// By this point we already know that xc is the comp_compositor, and when
// comp_compositor is initialized: c->base.base.base.predict_frame = compositor_predict_frame;
static inline xrt_result_t
xrt_comp_predict_frame(struct xrt_compositor *xc,
                       int64_t *out_frame_id,
                       uint64_t *out_wake_time_ns,
                       uint64_t *out_predicted_gpu_time_ns,
                       uint64_t *out_predicted_display_time_ns,
                       uint64_t *out_predicted_display_period_ns)
{
    return xc->predict_frame(             //
        xc,                               //
        out_frame_id,                     //
        out_wake_time_ns,                 //
        out_predicted_gpu_time_ns,        //
        out_predicted_display_time_ns,    //
        out_predicted_display_period_ns); //
}

So the actual call is compositor_predict_frame:

monado\src\xrt\compositor\main\comp_compositor.c
static xrt_result_t
compositor_predict_frame(struct xrt_compositor *xc,
                         int64_t *out_frame_id,
                         uint64_t *out_wake_time_ns,
                         uint64_t *out_predicted_gpu_time_ns,
                         uint64_t *out_predicted_display_time_ns,
                         uint64_t *out_predicted_display_period_ns)
{
    struct comp_compositor *c = comp_compositor(xc);
    // A little bit easier to read.
    //1. For the default framerate see comp_settings_init, which defines:
    //   DEBUG_GET_ONCE_NUM_OPTION(default_framerate, "XRT_COMPOSITOR_DEFAULT_FRAMERATE", 60)
    //   int default_framerate = debug_get_num_option_default_framerate();
    //   So the default is 60 FPS; when the frame rate changes, this value has to change
    //   with it. Here we note: interval = 16.6ms.
    uint64_t interval_ns = (int64_t)c->settings.nominal_frame_interval_ns;
    ......
    int64_t frame_id = -1;
    uint64_t wake_up_time_ns = 0;
    uint64_t present_slop_ns = 0;
    uint64_t desired_present_time_ns = 0;
    uint64_t predicted_display_time_ns = 0;
    comp_target_calc_frame_timings(  //
        c->target,                   //
        &frame_id,                   //
        &wake_up_time_ns,            //
        &desired_present_time_ns,    //
        &present_slop_ns,            //
        &predicted_display_time_ns); //
    ......
    c->frame.waited.id = frame_id;
    c->frame.waited.desired_present_time_ns = desired_present_time_ns;
    c->frame.waited.present_slop_ns = present_slop_ns;
    c->frame.waited.predicted_display_time_ns = predicted_display_time_ns;
    *out_frame_id = frame_id;
    *out_wake_time_ns = wake_up_time_ns;
    *out_predicted_gpu_time_ns = desired_present_time_ns; // Not quite right but close enough.
    *out_predicted_display_time_ns = predicted_display_time_ns;
    *out_predicted_display_period_ns = interval_ns;
    return XRT_SUCCESS;
}

The call to comp_target_calc_frame_timings is, again, a wrapper:

monado\src\xrt\compositor\main\comp_target.h
// cts->base.calc_frame_timings = comp_target_swapchain_calc_frame_timings;
static inline void
comp_target_calc_frame_timings(struct comp_target *ct,
                               int64_t *out_frame_id,
                               uint64_t *out_wake_up_time_ns,
                               uint64_t *out_desired_present_time_ns,
                               uint64_t *out_present_slop_ns,
                               uint64_t *out_predicted_display_time_ns)
{
    ......
    ct->calc_frame_timings(             //
        ct,                             //
        out_frame_id,                   //
        out_wake_up_time_ns,            //
        out_desired_present_time_ns,    //
        out_present_slop_ns,            //
        out_predicted_display_time_ns); //
}

So it actually lands in comp_target_swapchain_calc_frame_timings:

The actual computation

monado\src\xrt\compositor\main\comp_target_swapchain.c
static void
comp_target_swapchain_calc_frame_timings(struct comp_target *ct,
                                         int64_t *out_frame_id,
                                         uint64_t *out_wake_up_time_ns,
                                         uint64_t *out_desired_present_time_ns,
                                         uint64_t *out_present_slop_ns,
                                         uint64_t *out_predicted_display_time_ns)
{
    struct comp_target_swapchain *cts = (struct comp_target_swapchain *)ct;
    int64_t frame_id = -1;
    uint64_t wake_up_time_ns = 0;
    uint64_t desired_present_time_ns = 0;
    uint64_t present_slop_ns = 0;
    uint64_t predicted_display_time_ns = 0;
    uint64_t predicted_display_period_ns = 0;
    uint64_t min_display_period_ns = 0;
    u_pc_predict(cts->upc,                     //
                 &frame_id,                    //
                 &wake_up_time_ns,             //
                 &desired_present_time_ns,     //
                 &present_slop_ns,             //
                 &predicted_display_time_ns,   //
                 &predicted_display_period_ns, //
                 &min_display_period_ns);      //
    cts->current_frame_id = frame_id;
    *out_frame_id = frame_id;
    *out_wake_up_time_ns = wake_up_time_ns;
    *out_desired_present_time_ns = desired_present_time_ns;
    *out_predicted_display_time_ns = predicted_display_time_ns;
    *out_present_slop_ns = present_slop_ns;
}

Layer upon layer: u_pc_predict is yet another wrapper:

monado\src\xrt\auxiliary\util\u_pacing.h
// The Overview held one card back: struct fake_timing *ft = U_TYPED_CALLOC(struct fake_timing);
// fake_timing is a subclass of u_pacing_compositor, so the upc used here actually has:
// ft->base.predict = pc_predict;
// The reason for using fake timing is simple:
//   // The display timing code hasn't been tested on Android and may be broken.
//   comp_target_swapchain_init_and_set_fnptrs(&w->base, COMP_TARGET_FORCE_FAKE_DISPLAY_TIMING);
static inline void
u_pc_predict(struct u_pacing_compositor *upc,
             int64_t *out_frame_id,
             uint64_t *out_wake_up_time_ns,
             uint64_t *out_desired_present_time_ns,
             uint64_t *out_present_slop_ns,
             uint64_t *out_predicted_display_time_ns,
             uint64_t *out_predicted_display_period_ns,
             uint64_t *out_min_display_period_ns)
{
    upc->predict(upc,                             //
                 out_frame_id,                    //
                 out_wake_up_time_ns,             //
                 out_desired_present_time_ns,     //
                 out_present_slop_ns,             //
                 out_predicted_display_time_ns,   //
                 out_predicted_display_period_ns, //
                 out_min_display_period_ns);      //
}

At this point nothing has really started yet, so let's reorganize the whole flow so far.

  • The Compositor Thread initiates a wait_frame.
  • That calls comp_compositor::predict_frame, whose actual function is compositor_predict_frame.
  • That calls comp_target::calc_frame_timings, whose actual function is comp_target_swapchain_calc_frame_timings.
  • Finally it reaches u_pacing_compositor::predict, whose actual function is pc_predict.
monado\src\xrt\auxiliary\util\u_pacing_compositor_fake.c
static void
pc_predict(struct u_pacing_compositor *upc,
           int64_t *out_frame_id,
           uint64_t *out_wake_up_time_ns,
           uint64_t *out_desired_present_time_ns,
           uint64_t *out_present_slop_ns,
           uint64_t *out_predicted_display_time_ns,
           uint64_t *out_predicted_display_period_ns,
           uint64_t *out_min_display_period_ns)
{
    struct fake_timing *ft = fake_timing(upc);
    int64_t frame_id = ft->frame_id_generator++;
    // 1. After that long detour, this is where predicted_display_time_ns is obtained.
    uint64_t predicted_display_time_ns = predict_next_frame(ft);
    uint64_t desired_present_time_ns = predicted_display_time_ns - ft->present_offset_ns;
    uint64_t wake_up_time_ns = desired_present_time_ns - ft->app_time_ns;
    uint64_t present_slop_ns = U_TIME_HALF_MS_IN_NS;
    uint64_t predicted_display_period_ns = ft->frame_period_ns;
    uint64_t min_display_period_ns = ft->frame_period_ns;
    *out_frame_id = frame_id;
    *out_wake_up_time_ns = wake_up_time_ns;
    *out_desired_present_time_ns = desired_present_time_ns;
    *out_present_slop_ns = present_slop_ns;
    *out_predicted_display_time_ns = predicted_display_time_ns;
    *out_predicted_display_period_ns = predicted_display_period_ns;
    *out_min_display_period_ns = min_display_period_ns;
}

The final implementation in turn calls predict_next_frame:

monado\src\xrt\auxiliary\util\u_pacing_compositor_fake.c
static uint64_t
predict_next_frame(struct fake_timing *ft)
{
    uint64_t time_needed_ns = ft->present_offset_ns + ft->app_time_ns;
    uint64_t now_ns = os_monotonic_get_ns();
    uint64_t predicted_display_time_ns = ft->last_display_time_ns + ft->frame_period_ns;
    while (now_ns + time_needed_ns > predicted_display_time_ns) {
        predicted_display_time_ns += ft->frame_period_ns;
    }
    return predicted_display_time_ns;
}

Now combine this with where fake_timing is initialized:

monado\src\xrt\auxiliary\util\u_pacing_compositor_fake.c
xrt_result_t
u_pc_fake_create(uint64_t estimated_frame_period_ns, struct u_pacing_compositor **out_uft)
{
    struct fake_timing *ft = U_TYPED_CALLOC(struct fake_timing);
    ft->base.predict = pc_predict;
    ft->base.mark_point = pc_mark_point;
    ft->base.info = pc_info;
    ft->base.destroy = pc_destroy;
    //1. ft->frame_period_ns = 16.6ms
    ft->frame_period_ns = estimated_frame_period_ns;
    ft->frame_id_generator = 5;
    //2. ft->present_offset_ns = 4ms; this models the delay from the start of scan-out to display.
    ft->present_offset_ns = U_TIME_1MS_IN_NS * 4;
    //3. ft->app_time_ns = 16.6 * 0.2 ≈ 3.3ms; how long the "app" needs to render
    //   (here the "app" is really the compositor thread).
    ft->app_time_ns = get_percent_of_time(estimated_frame_period_ns, 20);
    //4. ft->last_display_time_ns = now + 50ms; assume the first display happens 50ms from now.
    ft->last_display_time_ns = os_monotonic_get_ns() + U_TIME_1MS_IN_NS * 50.0;
    ......
    return XRT_SUCCESS;
}

Good — that's the bottom of the stack. Now let's walk back up through the computation.

static uint64_t
predict_next_frame(struct fake_timing *ft)
{
    //1. time_needed_ns = 4 + 3.3 = 7.3ms
    uint64_t time_needed_ns = ft->present_offset_ns + ft->app_time_ns;
    uint64_t now_ns = os_monotonic_get_ns();
    //2. predicted_display_time_ns = the placeholder value from init + 16.6ms
    uint64_t predicted_display_time_ns = ft->last_display_time_ns + ft->frame_period_ns;
    //3. If predicted_display_time_ns is less than (now_ns + time_needed_ns), that display
    //   time has already been missed, so keep adding intervals to find the nearest
    //   reachable point.
    while (now_ns + time_needed_ns > predicted_display_time_ns) {
        predicted_display_time_ns += ft->frame_period_ns;
    }
    //4. The resulting predicted_display_time_ns is the first frame-aligned time just past
    //   (now + time_needed_ns).
    return predicted_display_time_ns;
}

With predict_next_frame's time in hand, look at the other parameters:

monado\src\xrt\auxiliary\util\u_pacing_compositor_fake.c
static void
pc_predict(struct u_pacing_compositor *upc,
           int64_t *out_frame_id,
           uint64_t *out_wake_up_time_ns,
           uint64_t *out_desired_present_time_ns,
           uint64_t *out_present_slop_ns,
           uint64_t *out_predicted_display_time_ns,
           uint64_t *out_predicted_display_period_ns,
           uint64_t *out_min_display_period_ns)
{
    struct fake_timing *ft = fake_timing(upc);
    int64_t frame_id = ft->frame_id_generator++;
    // 1. After the long detour, predicted_display_time_ns is obtained here:
    //    the first frame-aligned value just past (now + time_needed_ns).
    uint64_t predicted_display_time_ns = predict_next_frame(ft);
    // 2. desired_present_time_ns subtracts ft->present_offset_ns, so it still includes the
    //    render time but not the scan-out offset: it is the point at which the content
    //    must be ready.
    uint64_t desired_present_time_ns = predicted_display_time_ns - ft->present_offset_ns;
    // 3. wake_up_time_ns further subtracts ft->app_time_ns: start working at this point
    //    and you just make the display cadence; start any later and you're too late.
    uint64_t wake_up_time_ns = desired_present_time_ns - ft->app_time_ns;
    // 4. present_slop_ns = 0.5ms
    uint64_t present_slop_ns = U_TIME_HALF_MS_IN_NS;
    // 5. predicted_display_period_ns: the frame interval.
    uint64_t predicted_display_period_ns = ft->frame_period_ns;
    // 6. min_display_period_ns: the display's minimum display interval.
    uint64_t min_display_period_ns = ft->frame_period_ns;
    *out_frame_id = frame_id;
    *out_wake_up_time_ns = wake_up_time_ns;
    *out_desired_present_time_ns = desired_present_time_ns;
    *out_present_slop_ns = present_slop_ns;
    *out_predicted_display_time_ns = predicted_display_time_ns;
    *out_predicted_display_period_ns = predicted_display_period_ns;
    *out_min_display_period_ns = min_display_period_ns;
}

Skipping some pass-through layers, we jump straight back to compositor_predict_frame; note that out_min_display_period_ns and out_predicted_display_period_ns were dropped along the way.

monado\src\xrt\compositor\main\comp_compositor.c
static xrt_result_t
compositor_predict_frame(struct xrt_compositor *xc,
                         int64_t *out_frame_id,
                         uint64_t *out_wake_time_ns,
                         uint64_t *out_predicted_gpu_time_ns,
                         uint64_t *out_predicted_display_time_ns,
                         uint64_t *out_predicted_display_period_ns)
{
    struct comp_compositor *c = comp_compositor(xc);
    uint64_t interval_ns = (int64_t)c->settings.nominal_frame_interval_ns;
    ......
    comp_target_calc_frame_timings(  //
        c->target,                   //
        &frame_id,                   //
        &wake_up_time_ns,            //
        &desired_present_time_ns,    //
        &present_slop_ns,            //
        &predicted_display_time_ns); //
    ......
    // 1. fake_timing's ids start from 5; the id is really just a counter.
    c->frame.waited.id = frame_id;
    // 2. desired_present_time_ns is the value with present_offset_ns removed, i.e. the
    //    desired point for rendering to start.
    c->frame.waited.desired_present_time_ns = desired_present_time_ns;
    // 3. The exact role of this value is still unclear to me, but it is 0.5ms.
    c->frame.waited.present_slop_ns = present_slop_ns;
    // 4. predicted_display_time_ns is the actual display time: the first value just past
    //    (now + time_needed_ns).
    c->frame.waited.predicted_display_time_ns = predicted_display_time_ns;
    *out_frame_id = frame_id;
    // 5. wake_up_time_ns is desired_present_time_ns minus ft->app_time_ns (interval * 0.2):
    //    the time to start working.
    *out_wake_time_ns = wake_up_time_ns;
    // 6. Monado treats desired_present_time_ns as when the GPU starts working; per the
    //    comment it is only an estimate.
    *out_predicted_gpu_time_ns = desired_present_time_ns; // Not quite right but close enough.
    // 7. predicted_display_time and predicted_display_period are the predicted display
    //    point and the next interval.
    *out_predicted_display_time_ns = predicted_display_time_ns;
    *out_predicted_display_period_ns = interval_ns;
    return XRT_SUCCESS;
}

Tracing back up:

monado\src\xrt\compositor\multi\comp_multi_system.c
static void
wait_frame(struct xrt_compositor *xc,
           int64_t *out_frame_id,
           uint64_t *out_wake_time_ns,
           uint64_t *out_predicted_gpu_time_ns,
           uint64_t *out_predicted_display_time_ns,
           uint64_t *out_predicted_display_period_ns)
{
    ......
    xrt_comp_predict_frame(               //
        xc,                               //
        &frame_id,                        //
        &wake_up_time_ns,                 //
        out_predicted_gpu_time_ns,        //
        out_predicted_display_time_ns,    //
        out_predicted_display_period_ns); //
    uint64_t now_ns = os_monotonic_get_ns();
    // 1. We already know the latest time to start working is wake_up_time_ns, so there is
    //    no point starting early; just sleep until then.
    if (now_ns < wake_up_time_ns) {
        os_nanosleep(wake_up_time_ns - now_ns);
    }
    ......
    *out_frame_id = frame_id;
    *out_wake_time_ns = wake_up_time_ns;
}

With all of that, back to the Compositor Thread's main loop:

monado\src\xrt\compositor\multi\comp_multi_system.c
static int
multi_main_loop(struct multi_system_compositor *msc)
{
    ......
    struct xrt_compositor *xc = &msc->xcn->base;
    ......
    os_thread_helper_lock(&msc->oth);
    while (os_thread_helper_is_running_locked(&msc->oth)) {
        .......
        wait_frame(                        //
            xc,                            //
            &frame_id,                     //
            &wake_time_ns,                 //
            &predicted_gpu_time_ns,        //
            &predicted_display_time_ns,    //
            &predicted_display_period_ns); //
        uint64_t now_ns = os_monotonic_get_ns();
        // 1. diff_ns is the gap between the predicted display time and now.
        uint64_t diff_ns = predicted_display_time_ns - now_ns;
        // 2. Push the update out to the clients.
        broadcast_timings(msc, predicted_display_time_ns, predicted_display_period_ns, diff_ns);
        ......
        os_thread_helper_lock(&msc->oth);
    }
    ......
    return 0;
}

What follows is simple:

monado\src\xrt\compositor\multi\comp_multi_system.c
static void
broadcast_timings(struct multi_system_compositor *msc,
                  uint64_t predicted_display_time_ns,
                  uint64_t predicted_display_period_ns,
                  uint64_t diff_ns)
{
    ......
    for (size_t i = 0; i < ARRAY_SIZE(msc->clients); i++) {
        struct multi_compositor *mc = msc->clients[i];
        ......
        u_pa_info(                       //
            mc->upa,                     //
            predicted_display_time_ns,   //
            predicted_display_period_ns, //
            diff_ns);                    //
    }
    // Once all three last_timings fields are filled in, the compositor-thread side of the
    // timing computation is done.
    msc->last_timings.predicted_display_time_ns = predicted_display_time_ns;
    msc->last_timings.predicted_display_period_ns = predicted_display_period_ns;
    msc->last_timings.diff_ns = diff_ns;
    ......
}

On the compositor pacer's side, by contrast, the corresponding info callback, pc_info, is an empty implementation:

monado\src\xrt\auxiliary\util\u_pacing_compositor_fake.c
static void
pc_info(struct u_pacing_compositor *upc,
        int64_t frame_id,
        uint64_t desired_present_time_ns,
        uint64_t actual_present_time_ns,
        uint64_t earliest_present_time_ns,
        uint64_t present_margin_ns)
{
    /*
     * The compositor might call this function because it selected the
     * fake timing code even tho displaying timing is available.
     */
}

That is the whole update flow on the Compositor Thread side; afterwards it simply keeps iterating in the loop.

The client side

The client-side flow is essentially the same as the Compositor Thread's.

Initialization

The client-side data structures were introduced earlier:

  • multi_compositor
  • u_pacing_app

With the experience gained so far, there are only two places to look at on the client side: the initialization of these two structures:

xrt_result_t
multi_compositor_create(struct multi_system_compositor *msc,
                        const struct xrt_session_info *xsi,
                        struct xrt_compositor_native **out_xcn)
{
    ......
    // 1. Create the client-side multi_compositor.
    struct multi_compositor *mc = U_TYPED_CALLOC(struct multi_compositor);
    ......
    mc->base.base.wait_frame = multi_compositor_wait_frame;
    ......
    // 2. Create the u_pacing_app.
    u_pa_create(&mc->upa);
    ......
    // 3. Seed the u_pacing_app object from multi_system_compositor's last_timings.
    u_pa_info(                                         //
        mc->upa,                                       //
        msc->last_timings.predicted_display_time_ns,   //
        msc->last_timings.predicted_display_period_ns, //
        msc->last_timings.diff_ns);                    //
    ......
    return XRT_SUCCESS;
}

First, u_pa_create:

monado\src\xrt\auxiliary\util\u_pacing_app.c
xrt_result_t
u_pa_create(struct u_pacing_app **out_urt)
{
    struct pacing_app *pa = U_TYPED_CALLOC(struct pacing_app);
    pa->base.predict = pa_predict;
    ......
    pa->base.info = pa_info;
    ......
    // 1. CPU time budget: 2ms.
    pa->app.cpu_time_ns = U_TIME_1MS_IN_NS * 2;
    // 2. Draw time budget: 2ms.
    pa->app.draw_time_ns = U_TIME_1MS_IN_NS * 2;
    // 3. Margin: 2ms.
    pa->app.margin_ns = U_TIME_1MS_IN_NS * 2;
    ......
    return XRT_SUCCESS;
}

Then u_pa_info, which is of course another wrapper; the real function is pa_info:

monado\src\xrt\auxiliary\util\u_pacing_app.c
// 1. This parameter is the future display time predicted on the Compositor Thread: the
//    first interval-aligned value past (now + time_needed_ns), i.e.
//    (now + ft->present_offset_ns + ft->app_time_ns) = now + 4 + 3.3ms:
//    msc->last_timings.predicted_display_time_ns
// 2. The frame interval:
//    msc->last_timings.predicted_display_period_ns
// 3. At the end of wait_frame, after the sleep, the gap from now to the next display:
//    uint64_t diff_ns = predicted_display_time_ns - now_ns;
//    msc->last_timings.diff_ns
static void
pa_info(struct u_pacing_app *upa,
        uint64_t predicted_display_time_ns,
        uint64_t predicted_display_period_ns,
        uint64_t extra_ns)
{
    struct pacing_app *pa = pacing_app(upa);
    pa->last_input.predicted_display_time_ns = predicted_display_time_ns;
    pa->last_input.predicted_display_period_ns = predicted_display_period_ns;
    pa->last_input.extra_ns = extra_ns;
}

The update flow

With this information, let's look at what xrWaitFrame returns. The wrapping layers are skipped here (see the《Loader & Broker》and《Out Of Process》write-ups); we come straight to oxr_xrWaitFrame:

monado\src\xrt\state_trackers\oxr\oxr_session.c
XrResult
oxr_session_frame_wait(struct oxr_logger *log, struct oxr_session *sess, XrFrameState *frameState)
{
    .......
    CALL_CHK(xrt_comp_wait_frame(xc, &sess->frame_id.waited, &predicted_display_time, &predicted_display_period));
    ......
    frameState->shouldRender = should_render(sess->state);
    frameState->predictedDisplayPeriod = predicted_display_period;
    frameState->predictedDisplayTime =
        time_state_monotonic_to_ts_ns(sess->sys->inst->timekeeping, predicted_display_time);
    ......
    if (sess->frame_timing_wait_sleep_ms > 0) {
        os_nanosleep(U_TIME_1MS_IN_NS * sess->frame_timing_wait_sleep_ms);
    }
    return oxr_session_success_result(sess);
}

This xrt_comp_wait_frame is still a wrapper; the actual function is ipc_compositor_wait_frame:

static xrt_result_t
ipc_compositor_wait_frame(struct xrt_compositor *xc,
                          int64_t *out_frame_id,
                          uint64_t *out_predicted_display_time,
                          uint64_t *out_predicted_display_period)
{
    IPC_TRACE_MARKER();
    struct ipc_client_compositor *icc = ipc_client_compositor(xc);
    uint64_t wake_up_time_ns = 0;
    IPC_CALL_CHK(ipc_call_compositor_predict_frame(icc->ipc_c,                     // Connection
                                                   out_frame_id,                   // Frame id
                                                   &wake_up_time_ns,               // When we should wake up
                                                   out_predicted_display_time,     // Display time
                                                   out_predicted_display_period)); // Current period
    uint64_t now_ns = os_monotonic_get_ns();
    // Lets hope its not to late.
    if (wake_up_time_ns <= now_ns) {
        res = ipc_call_compositor_wait_woke(icc->ipc_c, *out_frame_id);
        return res;
    }
    const uint64_t _1ms_in_ns = 1000 * 1000;
    const uint64_t measured_scheduler_latency_ns = 50 * 1000;
    // Within one ms, just release the app right now.
    if (wake_up_time_ns - _1ms_in_ns <= now_ns) {
        res = ipc_call_compositor_wait_woke(icc->ipc_c, *out_frame_id);
        return res;
    }
    // This is how much we should sleep.
    uint64_t diff_ns = wake_up_time_ns - now_ns;
    // A minor tweak that helps hit the time better.
    diff_ns -= measured_scheduler_latency_ns;
    os_nanosleep(diff_ns);
    res = ipc_call_compositor_wait_woke(icc->ipc_c, *out_frame_id);
    ......
    return res;
}

Then ipc_call_compositor_predict_frame; let's jump straight to the server side, ipc_handle_compositor_predict_frame:

xrt_result_t
ipc_handle_compositor_predict_frame(volatile struct ipc_client_state *ics,
                                    int64_t *out_frame_id,
                                    uint64_t *out_wake_up_time_ns,
                                    uint64_t *out_predicted_display_time_ns,
                                    uint64_t *out_predicted_display_period_ns)
{
    IPC_TRACE_MARKER();
    if (ics->xc == NULL) {
        return XRT_ERROR_IPC_SESSION_NOT_CREATED;
    }
    /*
     * We use this to signal that the session has started, this is needed
     * to make this client/session active/visible/focused.
     */
    ipc_server_activate_session(ics);
    uint64_t gpu_time_ns = 0;
    return xrt_comp_predict_frame(        //
        ics->xc,                          //
        out_frame_id,                     //
        out_wake_up_time_ns,              //
        &gpu_time_ns,                     //
        out_predicted_display_time_ns,    //
        out_predicted_display_period_ns); //
}

Exactly the same pattern: xrt_comp_predict_frame is still a wrapper; the actual function is multi_compositor_predict_frame.

The actual computation

monado\src\xrt\compositor\multi\comp_multi_compositor.c
static xrt_result_t
multi_compositor_predict_frame(struct xrt_compositor *xc,
                               int64_t *out_frame_id,
                               uint64_t *out_wake_time_ns,
                               uint64_t *out_predicted_gpu_time_ns,
                               uint64_t *out_predicted_display_time_ns,
                               uint64_t *out_predicted_display_period_ns)
{
    ......
    u_pa_predict(                         //
        mc->upa,                          //
        out_frame_id,                     //
        out_wake_time_ns,                 //
        out_predicted_display_time_ns,    //
        out_predicted_display_period_ns); //
    ......
}

u_pa_predict is still a wrapper; the actual function is pa_predict:

monado\src\xrt\auxiliary\util\u_pacing_app.c
static void
pa_predict(struct u_pacing_app *upa,
           int64_t *out_frame_id,
           uint64_t *out_wake_up_time,
           uint64_t *out_predicted_display_time,
           uint64_t *out_predicted_display_period)
{
    struct pacing_app *pa = pacing_app(upa);
    int64_t frame_id = ++pa->frame_counter;
    *out_frame_id = frame_id;
    uint64_t period_ns = calc_period(pa);
    uint64_t predict_ns = predict_display_time(pa, period_ns);
    // When should the client wake up.
    uint64_t wake_up_time_ns = predict_ns - total_app_and_compositor_time_ns(pa);
    // When the client should deliver the frame to us.
    uint64_t delivery_time_ns = predict_ns - total_compositor_time_ns(pa);
    pa->last_returned_ns = predict_ns;
    *out_wake_up_time = wake_up_time_ns;
    *out_predicted_display_time = predict_ns;
    *out_predicted_display_period = period_ns;
    size_t index = GET_INDEX_FROM_ID(pa, frame_id);
    pa->frames[index].state = U_RT_PREDICTED;
    pa->frames[index].frame_id = frame_id;
    pa->frames[index].predicted_delivery_time_ns = delivery_time_ns;
    pa->frames[index].predicted_display_period_ns = period_ns;
    pa->frames[index].when.predicted_ns = os_monotonic_get_ns();
}

So let's expand this function piece by piece, and the final conclusion will fall out:

  • uint64_t period_ns = calc_period(pa);
// From the information passed in during the earlier initialization in u_app_create,
// pa->last_input.predicted_display_period_ns is in fact equal to
// msc->last_timings.predicted_display_period_ns, i.e. 16.6ms.
static uint64_t
min_period(const struct pacing_app *pa)
{
    return pa->last_input.predicted_display_period_ns;
}

static uint64_t
calc_period(const struct pacing_app *pa)
{
    // 1. Get base_period_ns = 16.6ms.
    uint64_t base_period_ns = min_period(pa);
    ......
    uint64_t period_ns = base_period_ns;

    // 2. This computes the real period: if the CPU work takes longer than one
    //    base period, two or even more base periods are needed to finish the
    //    actual CPU computation.
    //    pa->app.cpu_time_ns = 2ms, set at initialization.
    while (pa->app.cpu_time_ns > period_ns) {
        period_ns += base_period_ns;
    }

    // 3. Same idea as the CPU time; at initialization pa->app.draw_time_ns = 2ms.
    while (pa->app.draw_time_ns > period_ns) {
        period_ns += base_period_ns;
    }

    // 4. So our basic period is 16.6ms.
    return period_ns;
}
  • uint64_t predict_ns = predict_display_time(pa, period_ns);
// 1. First look at pa->last_input.extra_ns. It comes from
//    "3. At the end of wait_frame, after the sleep finishes, take the difference
//     between the next frame's display time and the current time:
//     uint64_t diff_ns = predicted_display_time_ns - now_ns;"
//    This value is computed on the Compositor Thread side: the gap between the
//    moment the compositor's CPU work starts and the display time.
//    That gap plus pa->app.margin_ns (2ms) is treated as the time budget the
//    compositor has for compositing (the maximum compositor time).
static uint64_t
total_compositor_time_ns(const struct pacing_app *pa)
{
    return pa->app.margin_ns + pa->last_input.extra_ns;
}

// Time the application side will spend: cpu + gpu = 4ms.
static uint64_t
total_app_time_ns(const struct pacing_app *pa)
{
    return pa->app.cpu_time_ns + pa->app.draw_time_ns;
}

// The display time the Compositor Thread predicted last time.
static uint64_t
last_sample_displayed(const struct pacing_app *pa)
{
    return pa->last_input.predicted_display_time_ns;
}

// The display time the application side predicted last time.
static uint64_t
last_return_predicted_display(const struct pacing_app *pa)
{
    return pa->last_returned_ns;
}

// Total time the app side plus the compositor side will spend.
static uint64_t
total_app_and_compositor_time_ns(const struct pacing_app *pa)
{
    return total_app_time_ns(pa) + total_compositor_time_ns(pa);
}

// With these basic variables and notions in mind, let's look at:
static uint64_t
predict_display_time(const struct pacing_app *pa, uint64_t period_ns)
{
    // 1. The current moment, T1.
    uint64_t now_ns = os_monotonic_get_ns();

    // 2. The total time the app and the Compositor Thread will consume: t1.
    uint64_t app_and_compositor_time_ns = total_app_and_compositor_time_ns(pa);

    // 3. TC0: the display time the compositor thread predicted for the
    //    previous frame.
    uint64_t val = last_sample_displayed(pa);

    // 4. TA0 is the display time the app side predicted for the previous frame.
    //    If TC0 <= TA0, the app has not finished yet, so the Compositor Thread
    //    cannot make that slot and we must push out to the next period.
    //    Call this new val the moment T2.
    while (val <= last_return_predicted_display(pa)) {
        val += period_ns;
    }

    // 5. For the known moment T2, subtract the app-side and Compositor Thread
    //    time cost; if the result is earlier than T1, the round of work that
    //    starts at T1 cannot possibly be displayed at T2, so defer by whole
    //    frame periods.
    while ((val - app_and_compositor_time_ns) <= now_ns) {
        val += period_ns;
    }

    // 6. With the algorithm above we end up with a T3 (possibly equal to T2):
    //    the display time predicted by the app side.
    return val;
}
  • uint64_t wake_up_time_ns = predict_ns - total_app_and_compositor_time_ns(pa);
    • Here wake_up_time_ns means the same thing as on the Compositor Thread side: the app can simply wait until this moment before starting its work.
  • uint64_t delivery_time_ns = predict_ns - total_compositor_time_ns(pa);
    • delivery_time_ns is also easy to understand: it is the latest moment by which the app must finish rendering and notify the Server.
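The arithmetic walked through above can be condensed into a small standalone calculation. The sketch below is not Monado's code: the struct and function names are hypothetical stand-ins, and the constants are the empirical values quoted in this article (16.6ms period, 2ms each for cpu/draw/margin, plus an assumed compositor extra_ns of 4ms).

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-in for Monado's pacing_app state (hypothetical names). */
struct toy_pacing {
    uint64_t period_ns;            /* predicted_display_period_ns, ~16.6ms     */
    uint64_t cpu_time_ns;          /* app CPU budget, 2ms                      */
    uint64_t draw_time_ns;         /* app GPU budget, 2ms                      */
    uint64_t margin_ns;            /* safety margin, 2ms                       */
    uint64_t extra_ns;             /* compositor lead time fed back from server */
    uint64_t last_comp_display_ns; /* TC0: compositor's last prediction        */
    uint64_t last_app_display_ns;  /* TA0: app's last prediction               */
};

/* Mirrors calc_period: grow the period until the CPU and GPU budgets fit. */
static uint64_t
toy_calc_period(const struct toy_pacing *p)
{
    uint64_t period_ns = p->period_ns;
    while (p->cpu_time_ns > period_ns)
        period_ns += p->period_ns;
    while (p->draw_time_ns > period_ns)
        period_ns += p->period_ns;
    return period_ns;
}

/* Mirrors predict_display_time: step TC0 forward past TA0, then keep
 * deferring until the app + compositor work fits before the slot. */
static uint64_t
toy_predict_display(const struct toy_pacing *p, uint64_t now_ns)
{
    uint64_t app_ns = p->cpu_time_ns + p->draw_time_ns;
    uint64_t comp_ns = p->margin_ns + p->extra_ns;
    uint64_t val = p->last_comp_display_ns;

    /* Never predict a slot at or before the one already promised (T2). */
    while (val <= p->last_app_display_ns)
        val += p->period_ns;

    /* Defer until the work starting at now_ns can finish in time (T3). */
    while (val - (app_ns + comp_ns) <= now_ns)
        val += p->period_ns;

    return val;
}
```

From the returned prediction, the wake-up time is `predict - (app + compositor)` and the delivery time is `predict - compositor`, exactly as in pa_predict.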
static void
pa_predict(struct u_pacing_app *upa,
           int64_t *out_frame_id,
           uint64_t *out_wake_up_time,
           uint64_t *out_predicted_display_time,
           uint64_t *out_predicted_display_period)
{
    struct pacing_app *pa = pacing_app(upa);
    ......
    uint64_t period_ns = calc_period(pa);
    uint64_t predict_ns = predict_display_time(pa, period_ns);
    uint64_t wake_up_time_ns = predict_ns - total_app_and_compositor_time_ns(pa);
    uint64_t delivery_time_ns = predict_ns - total_compositor_time_ns(pa);

    pa->last_returned_ns = predict_ns;

    *out_wake_up_time = wake_up_time_ns;
    *out_predicted_display_time = predict_ns;
    *out_predicted_display_period = period_ns;
    ......
    pa->frames[index].state = U_RT_PREDICTED;
    pa->frames[index].frame_id = frame_id;
    pa->frames[index].predicted_delivery_time_ns = delivery_time_ns;
    pa->frames[index].predicted_display_period_ns = period_ns;
    pa->frames[index].when.predicted_ns = os_monotonic_get_ns();
}

At this point the whole client-side time calculation is complete. One small detail worth noting is that pa keeps the previous frame's data around as well:

struct pacing_app
{
    ......
    struct u_pa_frame frames[2];
    ......
};
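With only two slots, GET_INDEX_FROM_ID is, in essence, a modulo over this small ring: the current frame and the previous frame alternate between the two entries. A minimal sketch of that idea, with a hypothetical helper name (the exact macro in Monado may differ):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define FRAME_SLOTS 2 /* matches frames[2] above */

/* Hypothetical helper mirroring what GET_INDEX_FROM_ID does in spirit:
 * map a monotonically increasing frame id onto the two-slot ring, so
 * consecutive frames land in alternating slots. */
static size_t
frame_index_from_id(int64_t frame_id)
{
    return (size_t)((uint64_t)frame_id % FRAME_SLOTS);
}
```

This is why frame N can still be tracked through xrBeginFrame/xrEndFrame while frame N+1 is already being predicted: they occupy different slots.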

Then, carrying:

  • out_wake_up_time
  • out_predicted_display_time
  • out_predicted_display_period

let's look at the second half of the RPC:

static xrt_result_t
ipc_compositor_wait_frame(struct xrt_compositor *xc,
                          int64_t *out_frame_id,
                          uint64_t *out_predicted_display_time,
                          uint64_t *out_predicted_display_period)
{
    ......
    IPC_CALL_CHK(ipc_call_compositor_predict_frame(icc->ipc_c,        // Connection
                                                   out_frame_id,      // Frame id
                                                   &wake_up_time_ns,  // When we should wake up
                                                   out_predicted_display_time,     // Display time
                                                   out_predicted_display_period)); // Current period
    // We come back with three values: out_wake_up_time,
    // out_predicted_display_time, out_predicted_display_period.
    uint64_t now_ns = os_monotonic_get_ns();

    // 1. If the current time is already past the wake-up time, the round trip
    //    came back a bit late; get on with the work below right away.
    if (wake_up_time_ns <= now_ns) {
        res = ipc_call_compositor_wait_woke(icc->ipc_c, *out_frame_id);
        return res;
    }

    const uint64_t _1ms_in_ns = 1000 * 1000;
    const uint64_t measured_scheduler_latency_ns = 50 * 1000;

    // 2. If there is too little time left to be worth sleeping, also get on
    //    with the work right away.
    if (wake_up_time_ns - _1ms_in_ns <= now_ns) {
        res = ipc_call_compositor_wait_woke(icc->ipc_c, *out_frame_id);
        return res;
    }

    // 3. There is still some time left, so we need to sleep; compute the
    //    exact sleep duration here.
    uint64_t diff_ns = wake_up_time_ns - now_ns;
    // Account for a bit of scheduling overhead.
    diff_ns -= measured_scheduler_latency_ns;

    // 4. Sleep.
    os_nanosleep(diff_ns);
    ......
    return res;
}
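The three-way decision above (already late / less than 1ms left / enough time to sleep) can be isolated into a pure function. The names below are hypothetical, but the constants (1ms cut-off, 50µs measured scheduler latency) are the ones used in the Monado code quoted above.

```c
#include <assert.h>
#include <stdint.h>

enum wake_action {
    WAKE_NOW,      /* already past wake-up time: proceed immediately       */
    WAKE_NO_SLEEP, /* less than 1ms of slack: not worth sleeping           */
    WAKE_SLEEP,    /* sleep for the remaining time minus scheduler latency */
};

/* Hypothetical helper condensing the branch logic of ipc_compositor_wait_frame;
 * on WAKE_SLEEP, *out_sleep_ns is the duration that would be passed to
 * os_nanosleep(). */
static enum wake_action
decide_wake(uint64_t wake_up_time_ns, uint64_t now_ns, uint64_t *out_sleep_ns)
{
    const uint64_t one_ms_ns = 1000 * 1000;
    const uint64_t sched_latency_ns = 50 * 1000;

    *out_sleep_ns = 0;
    if (wake_up_time_ns <= now_ns)
        return WAKE_NOW;
    if (wake_up_time_ns - one_ms_ns <= now_ns)
        return WAKE_NO_SLEEP;

    /* Sleep slightly short of the target to absorb scheduler wake-up delay. */
    *out_sleep_ns = wake_up_time_ns - now_ns - sched_latency_ns;
    return WAKE_SLEEP;
}
```

Undersleeping by the measured scheduler latency biases the client towards waking slightly early, which is the safe side: waking late eats directly into the app's CPU/GPU budget.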

TODO

  • [ ] In the actual code Monado uses fake_timing; there is also a display_timing path, which ties into some Vulkan APIs

  • [ ] has_GOOGLE_display_timing

  • [ ] What the OpenXR timings actually look like on a QCOM XR2 platform

  • [ ] Does ATW involve any timing-related work?